Learning multiplication-free linear transformations

Authors

Abstract

In this paper, we propose several dictionary learning algorithms for sparse representations that also impose specific structures on the learned dictionaries such that they have low coding complexity and are numerically efficient to use: a reduced number of additions/multiplications, or even avoiding multiplications altogether. We base our work on factorizations into highly structured basic building blocks (binary orthonormal, scaling, and shear transformations) for which we can write closed-form solutions to the optimization problems we consider. We show the effectiveness of our methods on image data, where we compare against well-known transforms such as the fast Fourier and discrete cosine transforms, and against dictionaries.
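As a toy illustration (not the paper's actual algorithms), the sketch below shows why factorizations into such building blocks can avoid multiplications: a binary butterfly needs only additions, and a shear whose coefficient is a power of two reduces to a bit shift. The function names and the choice k=3 are assumptions made for the example.

```python
# A toy sketch, not the paper's method: two of the structured building
# blocks named in the abstract, implemented without any multiplication.

def butterfly(a, b):
    """Binary orthonormal step (up to a global scale): additions only."""
    return a + b, a - b

def shear_pow2(a, b, k=3):
    """Shear [[1, 2**-k], [0, 1]]: the dyadic coefficient becomes a shift."""
    return a + (b >> k), b   # for integers, b >> k == b // 2**k

x, y = 96, 40
x, y = butterfly(x, y)       # (136, 56)
x, y = shear_pow2(x, y)      # (136 + 56 // 8, 56) == (143, 56)
print(x, y)                  # 143 56
```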


Similar resources

Learning Linear Transformations

We present a polynomial time algorithm to learn (in Valiant's PAC model) cubes in n-space (with general sides, not necessarily axis-parallel) given uniformly distributed samples from the cube. In fact, we solve the more general problem of learning linear transformations in n-space in polynomial time. I.e., suppose x is an n-vector whose coordinates are mutually independent random variables with u...


Parsing Linear-Context Free Rewriting Systems with Fast Matrix Multiplication

We describe a recognition algorithm for a subset of binary linear context-free rewriting systems (LCFRS) with running time O(n^{ωd}), where M(m) = O(m^ω) is the running time for m×m matrix multiplication and d is the "contact rank" of the LCFRS: the maximal number of combination and non-combination points that appear in the grammar rules. We also show that this algorithm can be used as a subroutine t...


Multiplication Free Holographic Coding

An empirical informational divergence (relative entropy) between two individual sequences has been introduced in [1]. It has been demonstrated that if the two sequences are independent realizations of two finite-order, finite-alphabet, stationary Markov processes, the proposed empirical divergence measure (ZMM) converges to the relative entropy almost surely. This leads to a realization of an ...
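For intuition only, here is a naive plug-in estimate of the divergence between the empirical first-order Markov laws of two sequences; it is not the parsing-based ZMM measure of [1], and the helper names are hypothetical.

```python
import math
from collections import Counter

def transition_probs(seq):
    """Empirical first-order Markov transition probabilities of a sequence."""
    pairs = Counter(zip(seq, seq[1:]))
    totals = Counter(seq[:-1])
    return {(a, b): c / totals[a] for (a, b), c in pairs.items()}

def empirical_divergence(seq_p, seq_q, eps=1e-12):
    """Plug-in estimate of D(P || Q) between two empirical Markov laws."""
    p, q = transition_probs(seq_p), transition_probs(seq_q)
    pairs = Counter(zip(seq_p, seq_p[1:]))
    n = sum(pairs.values())
    # eps guards transitions that never occur in seq_q
    return sum(c / n * math.log(p[ab] / q.get(ab, eps))
               for ab, c in pairs.items())

print(empirical_divergence("ababababab", "abbaabbaab"))  # ~0.59
```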


Linear Transformations

The n×m matrix A^T obtained by exchanging the rows and columns of A is called the transpose of A. A matrix A is said to be symmetric if A = A^T. The sum of two matrices of equal size is the matrix of the entry-by-entry sums, and the scalar product of a real number a and an m×n matrix A is the m×n matrix of all the entries of A, each multiplied by a. The difference of two matrices of equal size A a...
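The definitions above translate directly into, e.g., NumPy:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])         # a 2x3 matrix
print(A.T)                        # transpose: 3x2, rows and columns exchanged
print(A + A)                      # entry-by-entry sum of equal-size matrices
print(2.5 * A)                    # scalar product: every entry times 2.5

S = np.array([[1, 7], [7, 3]])
print(np.array_equal(S, S.T))     # True: S is symmetric (S == S^T)
```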


Deep Learning Made Easier by Linear Transformations in Perceptrons

We transform the outputs of each hidden neuron in a multi-layer perceptron network to have zero output and zero slope on average, and use separate shortcut connections to model the linear dependencies instead. This transformation aims at separating the problems of learning the linear and nonlinear parts of the whole input-output mapping, which has many benefits. We study the theoretical propert...
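A rough sketch of that transformation for a single tanh hidden unit, assuming the constants are fit on a batch of pre-activations (the paper learns them jointly with shortcut connections; variable names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)            # pre-activations of one hidden unit

f, df = np.tanh(x), 1 - np.tanh(x)**2
alpha = df.mean()                    # makes the average slope zero
beta = (f - alpha * x).mean()        # makes the average output zero
g = f - alpha * x - beta             # transformed nonlinearity

print(g.mean())                      # ~0: zero output on average
print((df - alpha).mean())           # ~0: zero slope on average
# the removed linear part alpha*x + beta is carried by a shortcut connection
```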



Journal

Journal title: Digital Signal Processing

Year: 2022

ISSN: 1051-2004, 1095-4333

DOI: https://doi.org/10.1016/j.dsp.2022.103463